185 research outputs found
Optimized kernel minimum noise fraction transformation for hyperspectral image classification
This paper presents an optimized kernel minimum noise fraction transformation (OKMNF) for feature extraction from hyperspectral imagery. The approach builds on the kernel minimum noise fraction (KMNF) transformation, a nonlinear dimensionality reduction method that maps the original data into a higher-dimensional feature space and provides a small number of high-quality features for classification and other post-processing. Noise estimation is a key component of KMNF and is commonly based on the strong correlation between adjacent pixels. However, hyperspectral images have limited spatial resolution and usually contain a large number of mixed pixels, which makes spatial information less reliable for noise estimation; this is the main reason KMNF often performs unstably in feature extraction for classification. To overcome this problem, this paper improves KMNF through more accurate noise estimation. We propose two new noise estimation methods, together with a framework that exploits both spectral and spatial de-correlation to improve noise estimation. Experimental results on a variety of hyperspectral images indicate that the proposed OKMNF is superior to related dimensionality reduction methods in most cases. Compared to conventional KMNF, OKMNF yields significant improvements in overall classification accuracy.
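To make the role of noise estimation concrete, here is a minimal numpy sketch of the classical *linear* MNF transform (the kernel variant in the paper operates in a feature space, which is omitted here). The shift-difference noise estimator and all function names are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def estimate_noise(cube):
    """Estimate noise samples as differences between horizontally adjacent
    pixels -- the common shift-difference heuristic the abstract refers to."""
    # cube: (rows, cols, bands)
    diff = cube[:, 1:, :] - cube[:, :-1, :]
    return diff.reshape(-1, cube.shape[2])

def mnf(cube, n_components=3):
    """Linear minimum noise fraction transform: maximize the ratio of data
    variance to noise variance via a generalized eigenproblem."""
    X = cube.reshape(-1, cube.shape[2]).astype(float)
    X -= X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)                # data covariance
    Sigma_n = np.cov(estimate_noise(cube), rowvar=False)  # noise covariance
    # Solve Sigma v = lambda Sigma_n v by whitening the noise first.
    w, U = np.linalg.eigh(Sigma_n)
    W = U @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ U.T
    s, V = np.linalg.eigh(W @ Sigma @ W)
    order = np.argsort(s)[::-1]                    # highest SNR first
    A = W @ V[:, order[:n_components]]
    return X @ A                                   # (pixels, n_components)
```

The sketch shows why a poor noise covariance `Sigma_n` (e.g. from mixed pixels) destabilizes the whole transform: it enters the whitening step directly.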
Dual-Stage Approach Toward Hyperspectral Image Super-Resolution
Hyperspectral imaging achieves high spectral resolution at the sacrifice of spatial resolution, and improving the spatial resolution without reducing the spectral resolution is a very challenging problem. Motivated by the observation that hyperspectral images exhibit high similarity between adjacent bands over a large spectral range, this paper explores a new structure for hyperspectral image super-resolution (DualSR) with a dual-stage design, i.e., a coarse stage and a fine stage. In the coarse stage, five bands with high similarity in a certain spectral range are divided into three groups, and the current band is guided to learn from them. Under an alternating spectral fusion mechanism, the coarse SR image is super-resolved band by band. To build the model from a global perspective, an enhanced back-projection method with a spectral angle constraint is developed in the fine stage to enforce spatial-spectral consistency, markedly improving the performance gain. Extensive experiments demonstrate the effectiveness of the proposed coarse and fine stages. Moreover, our network produces state-of-the-art results against existing works in terms of spatial reconstruction and spectral fidelity.
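The spectral angle constraint mentioned in the fine stage builds on the standard spectral angle measure between two spectra. A minimal sketch of that measure (not the paper's exact loss formulation) is:

```python
import numpy as np

def spectral_angle(a, b, eps=1e-12):
    """Spectral angle (radians) between two spectra: arccos of the normalized
    dot product. Smaller angles indicate higher spectral fidelity."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / max(np.linalg.norm(a) * np.linalg.norm(b), eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Because the angle is invariant to per-pixel scaling of the spectrum, a constraint built on it penalizes changes in spectral *shape* rather than brightness, which is why it suits super-resolution.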
Online Structured Sparsity-based Moving Object Detection from Satellite Videos
Inspired by recent developments in computer vision, low-rank and structured sparse matrix decomposition can potentially be used to extract moving objects from satellite videos. This family of approaches seeks rank minimization on the background, which typically requires batch optimization over a sequence of frames; this causes processing delays and limits their applications. To remedy this delay, we propose an Online Low-rank and Structured Sparse Decomposition (O-LSD). O-LSD reformulates the batch low-rank matrix decomposition with a structured sparse penalty as its equivalent frame-wise separable counterpart, which then defines a stochastic optimization problem for online subspace basis estimation. To enable online processing, O-LSD alternates foreground-background separation and subspace basis updates for every frame in a video. We also establish the convergence of O-LSD theoretically. Experimental results on two satellite videos demonstrate that the accuracy and runtime of O-LSD are comparable with those of batch-based approaches, with significantly reduced processing delay.
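The frame-wise idea behind such online decompositions can be sketched as follows: given a current subspace basis for the background, each incoming frame is projected onto that basis, and the thresholded residual is taken as foreground. This is a simplified stand-in for O-LSD's structured-sparse step; the function names, the plain (non-structured) threshold, and the omitted basis update are all illustrative assumptions:

```python
import numpy as np

def separate_frame(frame, basis, thresh=0.5):
    """One frame-wise separation step: least-squares projection onto the
    low-rank subspace gives the background; large residuals are foreground."""
    # frame: (pixels,), basis: (pixels, rank)
    coeff, *_ = np.linalg.lstsq(basis, frame, rcond=None)
    background = basis @ coeff
    residual = frame - background
    foreground = np.where(np.abs(residual) > thresh, residual, 0.0)
    return background, foreground
```

In the full online method, the basis itself is then updated stochastically from the recovered background, so no batch of frames ever needs to be held in memory.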
Guided Hybrid Quantization for Object detection in Multimodal Remote Sensing Imagery via One-to-one Self-teaching
Considering computational complexity, we propose a Guided Hybrid Quantization with One-to-one Self-Teaching (GHOST) framework. More concretely, we first design a structure called guided quantization self-distillation (GQSD), an innovative idea for achieving lightweight models through the synergy of quantization and distillation. The training of the quantized model is guided by its full-precision counterpart, which saves time and cost by not requiring a huge pre-trained model in advance. Second, we put forward a hybrid quantization (HQ) module that automatically finds the optimal bit width under a constrained condition, applying a threshold on the distribution distance between the center and samples in the weight-value search space. Third, to improve information transfer, we propose a one-to-one self-teaching (OST) module that gives the student network the ability of self-judgment. A switch control machine (SCM) builds a bridge between the student and teacher networks at the same location, helping the teacher reduce wrong guidance and impart vital knowledge to the student. This distillation method allows a model to learn from itself and gain substantial improvement without any additional supervision. Extensive experiments on a multimodal dataset (VEDAI) and single-modality datasets (DOTA, NWPU, and DIOR) show that object detection based on GHOST outperforms existing detectors. The tiny parameter size (<9.7 MB) and Bit-Operations (BOPs) (<2158 G), compared with any remote sensing-based, lightweight, or distillation-based algorithms, demonstrate its superiority in the lightweight design domain. Our code and model will be released at https://github.com/icey-zhang/GHOST.
Comment: This article has been submitted to TGRS and is under review.
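As background for the bit-width selection that GHOST automates, here is a minimal sketch of symmetric uniform weight quantization at a fixed bit width. The function name and the max-abs scaling rule are illustrative assumptions, not the HQ module itself:

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits:
    scale by the largest magnitude, round to integer levels, then rescale."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale, scale               # dequantized weights + step size
```

Lower bit widths shrink `levels` and enlarge the step `scale`, so the quantization error grows; a hybrid scheme assigns fewer bits only where the weight distribution tolerates it.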
Fusion of PCA and segmented-PCA domain multiscale 2-D-SSA for effective spectral-spatial feature extraction and data classification in hyperspectral imagery.
As hyperspectral imagery (HSI) contains rich spectral and spatial information, a novel principal component analysis (PCA) and segmented-PCA (SPCA)-based multiscale 2-D singular spectrum analysis (2-D-SSA) fusion method is proposed for joint spectral–spatial HSI feature extraction and classification. Considering the overall spectra and adjacent-band correlations of objects, PCA and SPCA are first used for spectral dimension reduction, respectively. Then, multiscale 2-D-SSA is applied to the SPCA dimension-reduced images to extract abundant spatial features at different scales, where PCA is applied again for dimensionality reduction. The resulting multiscale spatial features are then fused with the global spectral features derived from PCA to form multiscale spectral–spatial features (MSF-PCs). The performance of the extracted MSF-PCs is evaluated using a support vector machine (SVM) classifier. Experiments on four benchmark HSI data sets show that the proposed method outperforms other state-of-the-art feature extraction methods, including several deep learning approaches, when only a small number of training samples are available.
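The spectral dimension-reduction step that both PCA and SPCA perform here can be sketched with plain numpy; this is a generic PCA over the band axis (segmented PCA would simply apply the same routine to disjoint band groups), and the function name is an assumption:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project a hyperspectral cube (rows, cols, bands) onto its leading
    principal components along the spectral axis."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions,
    # ordered by decreasing singular value.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ Vt[:n_components].T
    return pcs.reshape(rows, cols, n_components)
```

The reduced cube then serves as input to the spatial feature extractor (2-D-SSA in the paper), which is why the abstract applies PCA both before and after the spatial stage.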